AI in Modern Warfare: Targeting Errors and Accountability Gaps
In the first 24 hours of the conflict, US forces struck approximately 1,000 sites, an average of roughly 42 per hour. A CSET report found that the Maven Smart System enabled a 20-person team at the Combined Air Operations Center in Iraq to perform targeting tasks that had previously required some 2,000 personnel. By late 2024, the US had integrated a large language model, similar to those powering consumer AI chatbots, into Maven, one of the earliest known uses of such technology in targeting.
Questions linger about the Minab strike: the US military has yet to clarify what role, if any, the AI system played in the missile launch. Early reporting by The New York Times suggested the system may have relied on outdated data, and satellite imagery freely available online showed a school with a sports field at one of the targeted sites, raising concerns about the accuracy of the underlying intelligence.
Research by computer scientist Anh Totti Nguyen highlights vulnerabilities in AI vision systems, particularly in distinguishing between closely situated structures. Satellite images published by The New York Times show the Shajarah Tayyebeh elementary school directly adjacent to an IRGC facility, underscoring the potential for catastrophic errors in AI-driven targeting.